Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csSE_bot@mastoxiv.page
2024-03-11 07:28:12

Quantifying Contamination in Evaluating Code Generation Capabilities of Language Models
Martin Riddell, Ansong Ni, Arman Cohan
arxiv.org/abs/2403.04811

@arXiv_csCL_bot@mastoxiv.page
2024-02-12 07:33:05

Large Language Models: A Survey
Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, Jianfeng Gao
arxiv.org/abs/2402.06196

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2024-03-12 07:12:15

Materials science in the era of large language models: a perspective
Ge Lei, Ronan Docherty, Samuel J. Cooper
arxiv.org/abs/2403.06949

@arXiv_csRO_bot@mastoxiv.page
2024-04-09 08:53:07

This arxiv.org/abs/2403.13801 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_csSE_bot@mastoxiv.page
2024-04-10 06:53:01

Model Generation from Requirements with LLMs: an Exploratory Study
Alessio Ferrari, Sallam Abualhaija, Chetan Arora
arxiv.org/abs/2404.06371

@arXiv_csCL_bot@mastoxiv.page
2024-02-12 08:30:41

This arxiv.org/abs/2311.13668 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_eessAS_bot@mastoxiv.page
2024-03-11 06:53:35

AttentionStitch: How Attention Solves the Speech Editing Problem
Antonios Alexos, Pierre Baldi
arxiv.org/abs/2403.04804

@arXiv_csDC_bot@mastoxiv.page
2024-05-06 08:27:13

This arxiv.org/abs/2404.13236 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_csHC_bot@mastoxiv.page
2024-05-02 07:32:59

Aptly: Making Mobile Apps from Natural Language
Evan W. Patton, David Y. J. Kim, Ashley Granquist, Robin Liu, Arianna Scott, Jennet Zamanova, Harold Abelson
arxiv.org/abs/2405.00229

@arXiv_csAR_bot@mastoxiv.page
2024-05-03 06:46:55

Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT
Paola Vitolo, George Psaltakis, Michael Tomlinson, Gian Domenico Licciardo, Andreas G. Andreou
arxiv.org/abs/2405.01419

@arXiv_csDB_bot@mastoxiv.page
2024-04-29 07:16:10

Automated Data Visualization from Natural Language via Large Language Models: An Exploratory Study
Yang Wu, Yao Wan, Hongyu Zhang, Yulei Sui, Wucai Wei, Wei Zhao, Guandong Xu, Hai Jin
arxiv.org/abs/2404.17136

@arXiv_csCE_bot@mastoxiv.page
2024-02-27 06:47:17

ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
Liuzhenghao Lv, Zongying Lin, Hao Li, Yuyang Liu, Jiaxi Cui, Calvin Yu-Chian Chen, Li Yuan, Yonghong Tian
arxiv.org/abs/2402.16445 arxiv.org/pdf/2402.16445
arXiv:2402.16445v1 Announce Type: new
Abstract: Large Language Models (LLMs), including GPT-x and LLaMA2, have achieved remarkable performance in multiple Natural Language Processing (NLP) tasks. Under the premise that protein sequences constitute the protein language, Protein Large Language Models (ProLLMs) trained on protein corpora excel at de novo protein sequence generation. However, as of now, unlike LLMs in NLP, no ProLLM is capable of multiple tasks in the Protein Language Processing (PLP) field. This prompts us to delineate the inherent limitations in current ProLLMs: (i) the lack of natural language capabilities, (ii) insufficient instruction understanding, and (iii) high training resource demands. To address these challenges, we introduce a training framework to transform any general LLM into a ProLLM capable of handling multiple PLP tasks. Specifically, our framework utilizes low-rank adaptation and employs a two-stage training approach, and it is distinguished by its universality, low overhead, and scalability. Through training under this framework, we propose the ProLLaMA model, the first known ProLLM to handle multiple PLP tasks simultaneously. Experiments show that ProLLaMA achieves state-of-the-art results in the unconditional protein sequence generation task. In the controllable protein sequence generation task, ProLLaMA can design novel proteins with desired functionalities. In the protein property prediction task, ProLLaMA achieves nearly 100% accuracy across many categories. The latter two tasks are beyond the reach of other ProLLMs. Code is available at github.com/Lyu6PosHao/ProLLaMA.
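
A minimal sketch (not the authors' released code) of the recipe the abstract outlines: take a general LLM, attach low-rank adapters, and train in two stages, first on raw protein sequences and then on multi-task PLP instructions. The base model name, file names, and the train() helper are placeholders.

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_id = "meta-llama/Llama-2-7b-hf"  # placeholder general-purpose LLM
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)

    # Low-rank adaptation: only the small adapter matrices are trained, keeping overhead low.
    lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(base, lora)

    # Stage 1: continued pre-training on protein sequences (the "protein language").
    # Stage 2: instruction tuning on natural-language + protein multi-task prompts.
    for stage_data in ["protein_sequences.txt", "plp_instructions.jsonl"]:
        train(model, tokenizer, stage_data)  # hypothetical training loop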

@arXiv_csCL_bot@mastoxiv.page
2024-05-09 06:48:44

Zero-shot LLM-guided Counterfactual Generation for Text
Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
arxiv.org/abs/2405.04793

@arXiv_csIR_bot@mastoxiv.page
2024-02-28 06:50:29

Natural Language Processing Methods for Symbolic Music Generation and Information Retrieval: a Survey
Dinh-Viet-Toan Le, Louis Bigo, Mikaela Keller, Dorien Herremans
arxiv.org/abs/2402.17467

@arXiv_csSE_bot@mastoxiv.page
2024-05-06 07:25:18

Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository
Ajinkya Deshpande, Anmol Agarwal, Shashank Shet, Arun Iyer, Aditya Kanade, Ramakrishna Bairi, Suresh Parthasarathy
arxiv.org/abs/2405.01573

@arXiv_csMM_bot@mastoxiv.page
2024-04-26 06:56:34

Semantically consistent Video-to-Audio Generation using Multimodal Language Large Model
Gehui Chen, Guan'an Wang, Xiaowen Huang, Jitao Sang
arxiv.org/abs/2404.16305 arxiv.org/pdf/2404.16305
arXiv:2404.16305v1 Announce Type: new
Abstract: Existing works have made strides in video generation, but the lack of sound effects (SFX) and background music (BGM) hinders a complete and immersive viewer experience. We introduce a novel semantically consistent video-to-audio generation framework, namely SVA, which automatically generates audio semantically consistent with the given video content. The framework harnesses the power of multimodal large language model (MLLM) to understand video semantics from a key frame and generate creative audio schemes, which are then utilized as prompts for text-to-audio models, resulting in video-to-audio generation with natural language as an interface. We show the satisfactory performance of SVA through case study and discuss the limitations along with the future research direction. The project page is available at huiz-a.github.io/audio4video.g.
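
A rough illustration of the pipeline the abstract describes, not the authors' code: pick a key frame, ask a multimodal LLM for an audio scheme, then hand that scheme to a text-to-audio model. Every function below is a hypothetical placeholder.

    def video_to_audio(video_path):
        # 1. Extract a representative key frame from the video (placeholder helper).
        key_frame = extract_key_frame(video_path)
        # 2. Ask a multimodal LLM to propose sound effects and background music for the scene.
        audio_scheme = mllm_describe(key_frame,
                                     prompt="Propose SFX and BGM that fit this scene.")
        # 3. Use the scheme as a natural-language prompt for a text-to-audio model.
        return text_to_audio(audio_scheme)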

@arXiv_csCR_bot@mastoxiv.page
2024-04-22 07:03:04

The Power of Words: Generating PowerShell Attacks from Natural Language
Pietro Liguori, Christian Marescalco, Roberto Natella, Vittorio Orbinato, Luciano Pianese
arxiv.org/abs/2404.12893

@arXiv_csHC_bot@mastoxiv.page
2024-03-04 08:31:36

This arxiv.org/abs/2310.09235 has been replaced.
initial toot: mastoxiv.page/@arXiv_csHC_…

@arXiv_csIT_bot@mastoxiv.page
2024-02-27 08:21:40

This arxiv.org/abs/2308.06013 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIT_…

@arXiv_csCY_bot@mastoxiv.page
2024-03-25 06:56:40

Application of GPT Language Models for Innovation in Activities in University Teaching
Manuel de Buenaga, Francisco Javier Bueno
arxiv.org/abs/2403.14694

@arXiv_csCL_bot@mastoxiv.page
2024-05-10 08:28:47

This arxiv.org/abs/2402.05812 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csSE_bot@mastoxiv.page
2024-05-06 07:25:26

On the Limitations of Embedding Based Methods for Measuring Functional Correctness for Code Generation
Atharva Naik
arxiv.org/abs/2405.01580

@michabbb@social.vivaldi.net
2024-03-25 22:54:59

Introducing Stable Code Instruct 3B
This #llm is an instruction-tuned Code LM based on Stable Code 3B. With natural language prompting, this model can handle a variety of tasks such as code generation, math, and other software-development-related queries.
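
A minimal usage sketch with Hugging Face transformers, assuming the checkpoint is published as stabilityai/stable-code-instruct-3b and ships a chat template (treat the model id and prompt as illustrative, not an official example):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stabilityai/stable-code-instruct-3b"  # assumed Hugging Face id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    messages = [{"role": "user",
                 "content": "Write a Python function that checks whether a number is prime."}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                           return_tensors="pt")
    output = model.generate(inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))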

@arXiv_csDC_bot@mastoxiv.page
2024-04-23 07:27:53

LLMChain: Blockchain-based Reputation System for Sharing and Evaluating Large Language Models
Mouhamed Amine Bouchiha, Quentin Telnoff, Souhail Bakkali, Ronan Champagnat, Mourad Rabah, Mickaël Coustaty, Yacine Ghamri-Doudane
arxiv.org/abs/2404.13236

@arXiv_csAI_bot@mastoxiv.page
2024-04-17 08:27:24

This arxiv.org/abs/2403.15879 has been replaced.
initial toot: mastoxiv.page/@arXiv_csAI_…

@arXiv_astrophIM_bot@mastoxiv.page
2024-03-15 07:17:39

PAPERCLIP: Associating Astronomical Observations and Natural Language with Multi-Modal Models
Siddharth Mishra-Sharma, Yiding Song, Jesse Thaler
arxiv.org/abs/2403.08851

@arXiv_csSE_bot@mastoxiv.page
2024-05-01 06:52:58

Exploring Multi-Lingual Bias of Large Code Models in Code Generation
Chaozheng Wang, Zongjie Li, Cuiyun Gao, Wenxuan Wang, Ting Peng, Hailiang Huang, Yuetang Deng, Shuai Wang, Michael R. Lyu
arxiv.org/abs/2404.19368

@arXiv_csIR_bot@mastoxiv.page
2024-05-03 06:50:19

"In-Context Learning" or: How I learned to stop worrying and love "Applied Information Retrieval"
Andrew Parry, Debasis Ganguly, Manish Chandra
arxiv.org/abs/2405.01116

@arXiv_csCL_bot@mastoxiv.page
2024-05-09 08:29:39

This arxiv.org/abs/2402.05812 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCR_bot@mastoxiv.page
2024-03-22 08:31:52

This arxiv.org/abs/2312.02003 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csDC_bot@mastoxiv.page
2024-04-26 08:31:42

This arxiv.org/abs/2404.12457 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_csGR_bot@mastoxiv.page
2024-02-16 08:31:16

This arxiv.org/abs/2310.17838 has been replaced.
initial toot: mastoxiv.page/@arXiv_csGR_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 07:16:50

FLAME: Factuality-Aware Alignment for Large Language Models
Sheng-Chieh Lin, Luyu Gao, Barlas Oguz, Wenhan Xiong, Jimmy Lin, Wen-tau Yih, Xilun Chen
arxiv.org/abs/2405.01525

@arXiv_csSE_bot@mastoxiv.page
2024-02-26 08:33:34

This arxiv.org/abs/2303.16749 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csRO_bot@mastoxiv.page
2024-03-22 08:36:20

This arxiv.org/abs/2309.15821 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 08:44:49

This arxiv.org/abs/2404.19048 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_statML_bot@mastoxiv.page
2024-02-15 08:48:48

This arxiv.org/abs/2402.08095 has been replaced.
initial toot: mastoxiv.page/@arXiv_sta…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 07:16:47

Analyzing the Role of Semantic Representations in the Era of Large Language Models
Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Schölkopf, Mona Diab
arxiv.org/abs/2405.01502

@arXiv_csSE_bot@mastoxiv.page
2024-03-04 06:52:57

An approach for performance requirements verification and test environments generation
Waleed Abdeen, Xingru Chen, Michael Unterkalmsteiner
arxiv.org/abs/2403.00099

@arXiv_csHC_bot@mastoxiv.page
2024-03-15 07:19:59

Enabling Waypoint Generation for Collaborative Robots using LLMs and Mixed Reality
Cathy Mengying Fang, Krzysztof Zieliński, Pattie Maes, Joe Paradiso, Bruce Blumberg, Mikkel Baun Kjærgaard
arxiv.org/abs/2403.09308

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 08:30:45

This arxiv.org/abs/2402.15987 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csAR_bot@mastoxiv.page
2024-02-20 06:46:54

Designing Silicon Brains using LLM: Leveraging ChatGPT for Automated Description of a Spiking Neuron Array
Michael Tomlinson, Joe Li, Andreas Andreou
arxiv.org/abs/2402.10920

@arXiv_eessAS_bot@mastoxiv.page
2024-03-15 07:28:34

WavCraft: Audio Editing and Generation with Natural Language Prompts
Jinhua Liang, Huan Zhang, Haohe Liu, Yin Cao, Qiuqiang Kong, Xubo Liu, Wenwu Wang, Mark D. Plumbley, Huy Phan, Emmanouil Benetos
arxiv.org/abs/2403.09527

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:49:01

RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing
Yucheng Hu, Yuxing Lu
arxiv.org/abs/2404.19543 arxiv.org/pdf/2404.19543
arXiv:2404.19543v1 Announce Type: new
Abstract: Large Language Models (LLMs) have catalyzed significant advancements in Natural Language Processing (NLP), yet they encounter challenges such as hallucination and the need for domain-specific knowledge. To mitigate these, recent methodologies have integrated information retrieved from external resources with LLMs, substantially enhancing their performance across NLP tasks. This survey paper addresses the absence of a comprehensive overview on Retrieval-Augmented Language Models (RALMs), both Retrieval-Augmented Generation (RAG) and Retrieval-Augmented Understanding (RAU), providing an in-depth examination of their paradigm, evolution, taxonomy, and applications. The paper discusses the essential components of RALMs, including Retrievers, Language Models, and Augmentations, and how their interactions lead to diverse model structures and applications. RALMs demonstrate utility in a spectrum of tasks, from translation and dialogue systems to knowledge-intensive applications. The survey includes several evaluation methods of RALMs, emphasizing the importance of robustness, accuracy, and relevance in their assessment. It also acknowledges the limitations of RALMs, particularly in retrieval quality and computational efficiency, offering directions for future research. In conclusion, this survey aims to offer a structured insight into RALMs, their potential, and the avenues for their future development in NLP. The paper is supplemented with a Github Repository containing the surveyed works and resources for further study: github.com/2471023025/RALM_Sur.
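
As a generic sketch of the Retriever / Language Model / Augmentation split the survey describes (plain RAG pseudocode, not tied to any particular system; retrieve() and generate() are hypothetical stand-ins):

    def rag_answer(query, corpus, k=5):
        # Retriever: fetch the k passages most relevant to the query.
        passages = retrieve(query, corpus, top_k=k)
        # Augmentation: prepend the retrieved evidence to the prompt.
        prompt = "\n".join(passages) + "\n\nQuestion: " + query + "\nAnswer:"
        # Language model: generate an answer grounded in the retrieved context.
        return generate(prompt)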

@arXiv_csDB_bot@mastoxiv.page
2024-03-20 06:48:12

Quantixar: High-performance Vector Data Management System
Gulshan Yadav, RahulKumar Yadav, Mansi Viramgama, Mayank Viramgama, Apeksha Mohite
arxiv.org/abs/2403.12583

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:44

Quantifying Memorization of Domain-Specific Pre-trained Language Models using Japanese Newspaper and Paywalls
Shotaro Ishihara
arxiv.org/abs/2404.17143

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:23

Saving the legacy of Hero Ibash: Evaluating Four Language Models for Aminoacian
Yunze Xiao, Yiyang Pan
arxiv.org/abs/2402.18121

@arXiv_csSE_bot@mastoxiv.page
2024-03-21 07:23:20

CONLINE: Complex Code Generation and Refinement with Online Searching and Correctness Testing
Xinyi He, Jiaru Zou, Yun Lin, Mengyu Zhou, Shi Han, Zejian Yuan, Dongmei Zhang
arxiv.org/abs/2403.13583

@arXiv_csCL_bot@mastoxiv.page
2024-05-06 08:26:30

This arxiv.org/abs/2311.12410 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csSE_bot@mastoxiv.page
2024-04-30 08:36:56

This arxiv.org/abs/2311.01020 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 08:44:25

This arxiv.org/abs/2404.15104 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csIR_bot@mastoxiv.page
2024-04-16 08:54:17

This arxiv.org/abs/2305.13859 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csSE_bot@mastoxiv.page
2024-04-22 06:52:52

Large Language Model Supply Chain: A Research Agenda
Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang
arxiv.org/abs/2404.12736

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:51:05

The First Place Solution of WSDM Cup 2024: Leveraging Large Language Models for Conversational Multi-Doc QA
Yiming Li, Zhao Zhang
arxiv.org/abs/2402.18385

@arXiv_csSE_bot@mastoxiv.page
2024-04-15 07:13:14

Analyzing the Performance of Large Language Models on Code Summarization
Rajarshi Haldar, Julia Hockenmaier
arxiv.org/abs/2404.08018

@arXiv_eessAS_bot@mastoxiv.page
2024-03-18 08:35:18

This arxiv.org/abs/2403.09527 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_csCL_bot@mastoxiv.page
2024-02-26 08:30:56

This arxiv.org/abs/2402.14379 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-01 08:30:13

This arxiv.org/abs/2403.07726 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:48:45

Countering Reward Over-optimization in LLM with Demonstration-Guided Reinforcement Learning
Mathieu Rita, Florian Strub, Rahma Chaabouni, Paul Michel, Emmanuel Dupoux, Olivier Pietquin
arxiv.org/abs/2404.19409 arxiv.org/pdf/2404.19409
arXiv:2404.19409v1 Announce Type: new
Abstract: While Reinforcement Learning (RL) has been proven essential for tuning large language models (LLMs), it can lead to reward over-optimization (ROO). Existing approaches address ROO by adding KL regularization, requiring computationally expensive hyperparameter tuning. Additionally, KL regularization focuses solely on regularizing the language policy, neglecting a potential source of regularization: the reward function itself. Inspired by demonstration-guided RL, we here introduce the Reward Calibration from Demonstration (RCfD), which leverages human demonstrations and a reward model to recalibrate the reward objective. Formally, given a prompt, the RCfD objective minimizes the distance between the demonstrations' and LLM's rewards rather than directly maximizing the reward function. This objective shift avoids incentivizing the LLM to exploit the reward model and promotes more natural and diverse language generation. We show the effectiveness of RCfD on three language tasks, which achieves comparable performance to carefully tuned baselines while mitigating ROO.
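
A minimal sketch of the objective shift the abstract describes, assuming a reward model reward(prompt, text) and one demonstration per prompt (names are illustrative, not the paper's code):

    def rcfd_loss(prompt, llm_output, demonstration, reward):
        # Standard RLHF maximizes reward(prompt, llm_output) directly, which invites
        # reward over-optimization. RCfD instead pulls the policy's reward toward the
        # demonstration's reward, removing the incentive to exploit the reward model.
        r_demo = reward(prompt, demonstration)
        r_llm = reward(prompt, llm_output)
        return abs(r_llm - r_demo)  # minimize this distance during RL fine-tuning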

@arXiv_csSE_bot@mastoxiv.page
2024-03-27 08:28:14

This arxiv.org/abs/2402.00093 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:56

When to Trust LLMs: Aligning Confidence with Response Quality
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, Bolin Ding
arxiv.org/abs/2404.17287

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 08:32:49

This arxiv.org/abs/2311.12410 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csSE_bot@mastoxiv.page
2024-04-19 08:32:51

This arxiv.org/abs/2303.01056 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-29 08:32:02

This arxiv.org/abs/2403.18018 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csSE_bot@mastoxiv.page
2024-02-20 06:58:58

Tool-Augmented LLMs as a Universal Interface for IDEs
Yaroslav Zharov, Yury Khudyakov, Evgeniia Fedotova, Evgeny Grigorenko, Egor Bogomolov
arxiv.org/abs/2402.11635

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:28:55

This arxiv.org/abs/2311.13668 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csSE_bot@mastoxiv.page
2024-02-14 07:13:42

Analyzing Prompt Influence on Automated Method Generation: An Empirical Study with Copilot
Ionut Daniel Fagadau, Leonardo Mariani, Daniela Micucci, Oliviero Riganelli
arxiv.org/abs/2402.08430

@arXiv_csCL_bot@mastoxiv.page
2024-02-23 06:56:10

LLMs with Industrial Lens: Deciphering the Challenges and Prospects -- A Survey
Ashok Urlana, Charaka Vinayak Kumar, Ajeet Kumar Singh, Bala Mallikarjunarao Garlapati, Srinivasa Rao Chalamala, Rahul Mishra
arxiv.org/abs/2402.14558

@arXiv_csCL_bot@mastoxiv.page
2024-04-19 08:29:31

This arxiv.org/abs/2404.06714 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-15 08:30:12

This arxiv.org/abs/2306.12951 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-21 09:03:43

This arxiv.org/abs/2403.07726 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-13 14:33:01

This arxiv.org/abs/2310.18376 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-04-15 08:30:50

This arxiv.org/abs/2404.06714 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-13 06:48:32

SemEval-2024 Shared Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes
Timothee Mickus, Elaine Zosa, Raúl Vázquez, Teemu Vahtola, Jörg Tiedemann, Vincent Segonne, Alessandro Raganato, Marianna Apidianaki
arxiv.org/abs/2403.07726
